23 research outputs found

    A Framework for Realistic 3D Tele-Immersion

    Get PDF
    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users that will feel much more similar to having face-to-face meetings than the experience offered by conventional teleconferencing systems.

    Towards image-based modelling, editing and rendering

    No full text
    Capturing and rendering real world objects and scenes with high visual quality has been one of the main topics in Computer Vision and Graphics in the last decades. Often, we do not only wish to display the captured content but also modify it, interact with it and experience it in an immersive way. Classic Computer Graphics modelling and rendering provides the freedom of modification and animation, but often lacks visual realism, especially if real-time constraints have to be met. Also, modelling often involves enormous manual work, and rendering and animation require costly physical simulations. Recent approaches directly capture real world objects and scenes and infer object characteristics from the captured data. Furthermore, more and more image-based modelling techniques have been developed in order to meet both the requirements of realistic appearance and animation, especially for complex objects. This is where Computer Vision and Graphics meet. In this talk, I will give an overview of ongoing work covering the whole processing chain from capturing, image and video analysis and understanding to image-based modelling, editing and rendering.

    Optical flow based tracking and retexturing of garments

    No full text
    In this paper, we present a method for tracking and retexturing of garments that exploits the entire image information using the optical flow constraint instead of working with distinct features. In a hierarchical framework we refine the motion model with every level. The motion model is used to regularize the optical flow field such that finding the best transformation amounts to minimizing an error function that can be solved in a least squares sense. Knowledge about the position and deformation of the garment in 2D allows us to erase the old texture and replace it by a new one with correct deformation and shading properties, without 3D reconstruction. Additionally, it provides an estimate of the irradiance such that the new texture can be illuminated realistically.
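    The central step of this formulation, fitting a parametric motion model directly to the optical flow constraint as a linear least squares problem, can be sketched as follows. This is a minimal illustration under stated assumptions (an affine motion model, grayscale frames as NumPy arrays, no hierarchical refinement), not the authors' implementation:

    import numpy as np

    def estimate_affine_motion(I0, I1):
        """Fit u = a0 + a1*x + a2*y, v = a3 + a4*x + a5*y to two grayscale frames."""
        Iy, Ix = np.gradient(I0.astype(np.float64))          # spatial gradients (rows = y, cols = x)
        It = I1.astype(np.float64) - I0.astype(np.float64)   # temporal derivative
        h, w = I0.shape
        ys, xs = np.mgrid[0:h, 0:w]
        x, y = xs.ravel(), ys.ravel()
        ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()

        # One optical flow constraint equation per pixel, Ix*u + Iy*v + It = 0,
        # linear in the six affine parameters.
        A = np.stack([ix, ix * x, ix * y, iy, iy * x, iy * y], axis=1)
        b = -it
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params  # (a0, ..., a5)

    In the hierarchical framework described above, such an estimate would be recomputed from coarse to fine pyramid levels, warping the second frame with the current motion model before each refinement.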

    DEFORMABLE OBJECT TRACKING USING OPTICAL FLOW CONSTRAINTS

    No full text
    In this paper, we present a method for deformable object tracking that exploits the entire image information using the optical flow equation instead of working with discrete feature points. Our method starts from the optical flow constraint and first estimates global transformations in a hierarchical framework. Elastic deformations are then estimated separately using deformable meshes and spatial and temporal smoothing constraints. In both cases additional constraints to regularize the optical flow field are obtained from the motion model, such that finding the best transformation amounts to minimizing an error function that can be solved in a least squares sense. Combining deformable meshes and the optical flow equation with a dedicated weighted smoothness constraint on the mesh deformation, and estimating global transformations separately from elastic deformations, is key to dealing with complex deformations such as cloth deformation as a person moves.
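    As an illustration of the second stage, estimating elastic deformations with a smoothness constraint can be written as a regularized linear least squares problem. The sketch below is a simplified stand-in under stated assumptions (uniform weights, displacements observed directly at the mesh vertices, a graph Laplacian as the smoothness term), not the paper's exact formulation:

    import numpy as np

    def estimate_mesh_deformation(flow_at_vertices, edges, n_vertices, lam=1.0):
        """flow_at_vertices: (V, 2) observed displacements; edges: list of (i, j) mesh edges."""
        # Graph Laplacian of the mesh connectivity, used as the smoothness term.
        L = np.zeros((n_vertices, n_vertices))
        for i, j in edges:
            L[i, i] += 1.0
            L[j, j] += 1.0
            L[i, j] -= 1.0
            L[j, i] -= 1.0

        # Normal equations of  min ||d - d_obs||^2 + lam * ||L d||^2,
        # solved jointly for the x and y displacement components.
        A = np.eye(n_vertices) + lam * (L.T @ L)
        return np.linalg.solve(A, flow_at_vertices)   # (V, 2) regularized displacements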

    Representation of complex and/or deformable objects and virtual try-on of wearable objects (Darstellung komplexer und/oder deformierbarer Objekte sowie virtuelle Anprobe von anziehbaren Objekten)

    No full text
    DE 102009036029 A1 UPAB: 20110223 NOVELTY - The device has a stroke unit (12) for stroke on the basis of a given scene condition, in a database (16). An adjustment unit (14) is provided for producing a view by adapting another view to the given scene condition. A texturing unit (62) is provided, which operates on the basis of texture mapping information of the object, so that the appearance of the object in the former view corresponds to a given texture map. DETAILED DESCRIPTION - INDEPENDENT CLAIMS are also included for the following: (1) a device for virtual fitting of objects wearable on the body; (2) a method for virtual fitting of objects wearable on the body; and (3) a method for representing a complex or deformable object under a given scene condition. USE - Device for representing a complex or deformable object under a given scene condition. ADVANTAGE - The texturing unit operates on the basis of texture mapping information of the object, so that the appearance of the object in the former view corresponds to a given texture map, and hence ensures representation of a complex or deformable object under a given scene condition.

    Deshaking endoscopic video for kymography

    No full text
    The opening and closing of the vocal folds (plica vocalis) at high frequencies is a major source of sound in human speech. Videokymography [Svec and Schutte 1995] is a technique for visualizing the motion of the vocal folds for medical diagnosis: the vibrating folds are filmed with an endoscopic camera pointed into the larynx. The camera records at a high frame rate to capture vocal fold vibration (see fig. 1 for example frames). The kymogram used for medical diagnosis is a time-slice image, i.e. an X-t cut through the X-Y-t image cube of the endoscopic video (fig. 2). The quality and diagnostic interpretability of a kymogram deteriorate significantly if the camera moves relative to the scene, as this motion interferes with the vibratory motion of the vocal folds in the kymogram. Therefore, we propose an approach to stabilizing the motion of endoscopic video for kymography.
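    For illustration, the time-slice construction of a kymogram from the X-Y-t image cube amounts to a single array slice. The sketch below assumes the (already stabilized) video is loaded as a NumPy array and that one scan line across the vocal folds has been chosen; variable names are assumptions:

    import numpy as np

    def extract_kymogram(frames: np.ndarray, row: int) -> np.ndarray:
        """frames: (t, Y, X) grayscale video cube; row: y-coordinate of the scan line."""
        # Each video frame contributes one line of the X-t kymogram.
        return frames[:, row, :]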

    Patch-based reconstruction and rendering of human heads

    No full text
    Reconstructing the 3D shape of human faces is an intensively researched topic. Most approaches aim at generating a closed surface representation of geometry, i.e. a mesh, which is texture-mapped for rendering. However, if free viewpoint rendering is the primary purpose of the reconstruction, representations other than meshes are possible. In this paper a coarse patch-based approach to both reconstruction and rendering is explored and applied not only to the face but to the whole human head. The approach has advantages for parts of the scene that are traditionally difficult to reconstruct and render, which is the case for hair when it comes to human heads. In the paper, reconstruction of a patch is posed as a parameter estimation problem which is solved in a generic image-based optimization framework using the Levenberg-Marquardt algorithm. In order to improve robustness, the Huber error metric is used and a geometric regularization strategy is introduced. Initial values for the optimization, which are crucial for the method's success, are obtained by triangulation of SIFT feature points and a recursive expansion scheme.
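    The per-patch estimation can be pictured as a robust nonlinear least squares problem. The sketch below is a hedged illustration that uses SciPy's generic solver with a Huber loss in place of the paper's Levenberg-Marquardt implementation (SciPy switches to a trust-region method when a robust loss is selected); the rendering callback, the regularization term and all names are assumptions:

    import numpy as np
    from scipy.optimize import least_squares

    def fit_patch(x0, observed_pixels, render_patch, neighbour_params, reg_weight=0.1):
        """x0: initial patch parameters, e.g. from triangulated SIFT feature points."""
        def residuals(x):
            photometric = (render_patch(x) - observed_pixels).ravel()   # image-based data term
            geometric = reg_weight * (x - neighbour_params)             # simple geometric regularization
            return np.concatenate([photometric, geometric])

        # Robust nonlinear least squares; the Huber loss downweights outlier pixels.
        result = least_squares(residuals, x0, loss="huber", f_scale=1.0)
        return result.x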

    Provider Bulletin

    No full text
    RE: MassHealth Essential to Cover Visual Analysis by Optometrist